
    Developing a theoretical model and questionnaire survey instrument to measure the success of electronic health records in residential aged care

    Electronic health records (EHR) are introduced into healthcare organizations worldwide to improve patient safety, healthcare quality and efficiency. A rigorous evaluation of this technology is important to reduce potential negative effects on patients and staff, to provide decision makers with accurate information for system improvement, and to ensure return on investment. Therefore, this study develops a theoretical model and questionnaire survey instrument to assess the success of organizational EHR in routine use from the viewpoint of nursing staff in residential aged care homes. The proposed research model incorporates six variables from the reformulated DeLone and McLean information systems success model: system quality, information quality, service quality, use, user satisfaction and net benefits. Two further variables, training and self-efficacy, were also incorporated into the model. A questionnaire survey instrument was designed to measure the eight variables in the model. After a pilot test, the measurement scale was used to collect data from 243 nursing staff members in 10 residential aged care homes belonging to three management groups in Australia. Partial least squares path modeling was conducted to validate the model. The validated EHR systems success model predicts the impact of the four antecedent variables (training, self-efficacy, system quality and information quality) on net benefits, the indicator of EHR systems success, through the mediating variables use and user satisfaction. A 24-item measurement scale was developed to quantitatively evaluate the performance of an EHR system. The parsimonious EHR systems success model and the measurement scale can be used to benchmark EHR systems success across organizations and units and over time.
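
    The path structure described in this abstract (antecedent variables to the mediators use and user satisfaction, and mediators to net benefits) can be illustrated with a short script. This is a minimal sketch that chains ordinary least squares regressions on synthetic construct scores as a stand-in for the partial least squares path modeling reported in the study; the column names and data are assumptions for illustration only.

```python
# Minimal sketch of the EHR success path structure (not the study's PLS estimator).
# Assumes a survey data frame with one averaged score per construct.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 243  # sample size matching the study
antecedents = ["training", "self_efficacy", "system_quality", "information_quality"]
df = pd.DataFrame(rng.normal(size=(n, 4)), columns=antecedents)

# Synthetic mediators and outcome, for illustration only.
df["use"] = df[["training", "system_quality"]].mean(axis=1) + rng.normal(scale=0.5, size=n)
df["user_satisfaction"] = df[["self_efficacy", "information_quality"]].mean(axis=1) + rng.normal(scale=0.5, size=n)
df["net_benefits"] = df[["use", "user_satisfaction"]].mean(axis=1) + rng.normal(scale=0.5, size=n)

# Stage 1: antecedents -> mediators (use, user satisfaction).
use_model = LinearRegression().fit(df[antecedents], df["use"])
sat_model = LinearRegression().fit(df[antecedents], df["user_satisfaction"])
# Stage 2: mediators -> net benefits, the indicator of EHR systems success.
nb_model = LinearRegression().fit(df[["use", "user_satisfaction"]], df["net_benefits"])

print("use <-", dict(zip(antecedents, use_model.coef_.round(2))))
print("net_benefits <-", dict(zip(["use", "user_satisfaction"], nb_model.coef_.round(2))))
```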

    A Multi-cut Formulation for Joint Segmentation and Tracking of Multiple Objects

    Recently, minimum cost multicut formulations have been proposed and proven successful in both motion trajectory segmentation and multi-target tracking scenarios. Both tasks benefit from decomposing a graphical model into an optimal number of connected components based on attractive and repulsive pairwise terms. The two tasks are formulated at different levels of granularity and, accordingly, leverage mostly local information for motion segmentation and mostly high-level information for multi-target tracking. In this paper we argue that point trajectories and their local relationships can contribute to the high-level task of multi-target tracking, and also that high-level cues from object detection and tracking are helpful for solving motion segmentation. We propose a joint graphical model for point trajectories and object detections whose multicuts are solutions to motion segmentation and multi-target tracking problems at once. Results on the FBMS59 motion segmentation benchmark as well as on pedestrian tracking sequences from the 2D MOT 2015 benchmark demonstrate the promise of this joint approach.
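
    Conceptually, these formulations cut a weighted graph into components so that attractive (positive) pairwise terms end up inside components and repulsive (negative) terms end up across the cut. The sketch below is a toy greedy heuristic on a hand-built graph that mimics this decomposition; it is not the paper's multicut solver, and the nodes, weights and merging rule are assumptions for illustration.

```python
# Toy greedy heuristic for decomposing a graph with attractive (>0) and
# repulsive (<0) pairwise terms; not the paper's multicut solver.
import networkx as nx

def greedy_decompose(graph):
    """Merge components while the best inter-component weight sum is positive."""
    clusters = {n: {n} for n in graph.nodes}   # each node starts in its own component
    label = {n: n for n in graph.nodes}        # node -> component representative
    while True:
        best, pair = 0.0, None
        seen = set()
        for a, b, _ in graph.edges(data="weight"):
            ca, cb = label[a], label[b]
            if ca == cb or (ca, cb) in seen:
                continue
            seen.add((ca, cb)); seen.add((cb, ca))
            # Total attraction/repulsion between the two current components.
            total = sum(graph[u][v]["weight"]
                        for u in clusters[ca] for v in clusters[cb]
                        if graph.has_edge(u, v))
            if total > best:
                best, pair = total, (ca, cb)
        if pair is None:
            return clusters
        ca, cb = pair
        clusters[ca] |= clusters[cb]
        for n in clusters.pop(cb):
            label[n] = ca

# Toy graph: two point trajectories and two detections (weights are assumed).
g = nx.Graph()
g.add_weighted_edges_from([("traj1", "traj2", 2.0),    # attractive: similar motion
                           ("traj1", "det_A", 1.5),    # trajectory supports detection A
                           ("traj2", "det_B", -1.0)])  # repulsive: different objects
print(greedy_decompose(g))
```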

    Learning to Predict the Cosmological Structure Formation

    Matter evolved under the influence of gravity from minuscule density fluctuations. Non-perturbative structure formed hierarchically over all scales and developed non-Gaussian features in the Universe, known as the Cosmic Web. Fully understanding the structure formation of the Universe is one of the holy grails of modern astrophysics. Astrophysicists survey large volumes of the Universe and employ a large ensemble of computer simulations to compare with the observed data in order to extract the full information of our own Universe. However, to evolve trillions of galaxies over billions of years even with the simplest physics is a daunting task. We build a deep neural network, the Deep Density Displacement Model (hereafter D^3M), to predict the non-linear structure formation of the Universe from simple linear perturbation theory. Our extensive analysis demonstrates that D^3M outperforms second-order perturbation theory (hereafter 2LPT), the commonly used fast approximate simulation method, in point-wise comparison, 2-point correlation, and 3-point correlation. We also show that D^3M is able to accurately extrapolate far beyond its training data, and predict structure formation for significantly different cosmological parameters. Our study proves, for the first time, that deep learning is a practical and accurate alternative to approximate simulations of the gravitational structure formation of the Universe.
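
    As a concrete, if greatly simplified, illustration of learning a mapping from linear perturbation theory to the non-linear displacement field, the sketch below defines a small 3D convolutional network and runs one training step on random stand-in tensors. The architecture, tensor shapes and hyperparameters are illustrative assumptions; they are not the D^3M network or its training setup.

```python
# Toy 3D CNN mapping a linear-theory displacement field to a non-linear one.
# Shapes and layer choices are illustrative assumptions, not the D^3M design.
import torch
import torch.nn as nn

class ToyDisplacementNet(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, channels, kernel_size=3, padding=1),   # 3 displacement components in
            nn.ReLU(),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(channels, 3, kernel_size=3, padding=1),   # 3 displacement components out
        )

    def forward(self, linear_displacement):
        # Predict a residual on top of the linear-theory input field.
        return linear_displacement + self.net(linear_displacement)

# One training step on random stand-in data (batch, 3 components, 32^3 voxels).
model = ToyDisplacementNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
linear_field = torch.randn(2, 3, 32, 32, 32)   # stand-in for the linear-theory input
nbody_field = torch.randn(2, 3, 32, 32, 32)    # stand-in for the N-body "truth"
loss = nn.functional.mse_loss(model(linear_field), nbody_field)
loss.backward()
optimizer.step()
print(f"toy loss: {loss.item():.3f}")
```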

    A new approach to the study on counterexamples of generic sentences: From the perspective of interactive reference point-target relationship and re-categorization model

    Addressing deficiencies in existing research, this paper brings the tolerance of counterexamples, which reflects an apparent syntax-semantics mismatch in generic sentences, and the online cognitive processing of these sentences into the same analytical framework. It proposes the Interactive Reference Point-target Relationship and Re-categorization Model (IRPR-RC Model) to give a unified explanation of the main types of counterexample-tolerating generic sentences (GS), thus further fulfilling the generalization commitment of cognitive linguistics. According to this model: 1) there is an interactive relationship between reference points and targets that connects generic words and attribute words in counterexample-tolerating GS; 2) this interactive relationship provides the premise for re-categorization, which selects a particular sub-category and makes it salient, a process that can also be viewed as the attribute words coercing the generic words; 3) the model can be divided into three types, the Focusing Type, the Imbedding Type and the Repulsing Type, according to the different operating mechanisms of the IRPR-RC Model in counterexample-tolerating GS.

    Coastal Disasters and Remote Sensing Monitoring Methods

    Coastal disasters are abnormal changes caused by climate change, human activities, geological movement or changes in the natural environment. Classified by formation cause, marine disasters include storm surges, waves, tsunamis, coastal erosion, sea-level rise, red tide, seawater intrusion, marine oil spills and soil salinization. Remote sensing technology offers real-time, large-area advantages for improving the monitoring and forecasting of coastal disasters. Compared with natural disasters, those caused by human factors are more readily monitored and prevented. In this paper, we use several remote sensing methods to monitor or forecast three kinds of coastal disaster caused by human factors, namely red tide, sea-level rise and oil spills, and make proposals for infrastructure based on the research results. Red tide is monitored by inverting chlorophyll-a concentration with an improved OC3M model, which is better suited to the coastal zone and has higher spatial resolution than the standard MODIS chlorophyll-a product. Sea-level rise in the coastal zone is monitored through changes in coastlines free of artificial modification. An improved Lagrangian model simulates the trajectory of an oil slick efficiently. Planning infrastructure according to the coastal disasters and the features of the coastline contributes to coastal disaster prevention and coastal ecosystem protection. Multi-source remote sensing data can effectively monitor and help prevent coastal disasters, and provide planning advice for coastal infrastructure construction.
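
    As a rough illustration of the Lagrangian approach to oil-slick trajectories mentioned above, the sketch below advects particles with a surface current, a wind-drift term and a random-walk diffusion step. The drift factor, diffusivity, time step and velocity values are illustrative assumptions, not the paper's improved model or its forcing data.

```python
# Toy Lagrangian particle model for an oil slick: advection by current plus a
# wind-drift factor and random-walk diffusion. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def step(positions, current, wind, dt=600.0, wind_drift=0.03, diffusivity=1.0):
    """Advance particle positions (metres) by one time step of dt seconds."""
    drift = current + wind_drift * wind   # effective drift velocity (m/s)
    random_walk = np.sqrt(2.0 * diffusivity * dt) * rng.standard_normal(positions.shape)
    return positions + drift * dt + random_walk

# 500 particles released at the origin; constant stand-in current and wind fields.
particles = np.zeros((500, 2))
current = np.array([0.20, 0.05])   # m/s, surface current vector (assumed)
wind = np.array([5.0, -2.0])       # m/s, wind vector (assumed)
for _ in range(144):               # simulate 24 hours with 10-minute steps
    particles = step(particles, current, wind)
print("slick centroid after 24 h (km):", (particles.mean(axis=0) / 1000).round(2))
```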